NeuGuard: Lightweight Neuron-Guided Defense against Membership Inference Attacks
Membership inference attacks (MIAs) against machine learning models can lead
to serious privacy risks for the dataset used to train the model.
In this paper, we propose NeuGuard, a novel and effective Neuron-Guided
Defense against MIAs. We identify a key weakness in existing defense
mechanisms: they cannot simultaneously defend against the two commonly used
neural network based MIAs, indicating that both attacks should be evaluated
separately to ensure defense effectiveness. NeuGuard jointly controls the
output neurons and the inner neurons' activations, with the objective of
guiding the model outputs on the training set and the testing set toward
closely matching distributions.
NeuGuard consists of class-wise variance minimization, which restricts the
final output neurons, and layer-wise balanced output control, which constrains
the inner neurons in each layer. We evaluate NeuGuard against state-of-the-art
defenses under the two neural network based MIAs and five of the strongest
metric-based MIAs, including the newly proposed label-only MIA, on three
benchmark datasets. Results show that NeuGuard outperforms the state-of-the-art
defenses, offering a much better utility-privacy trade-off, broader generality,
and lower overhead.
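
As an illustration of the kind of regularization the abstract describes, the following is a minimal PyTorch sketch, assuming a model that returns both logits and a list of hidden-layer activations. The function names, the weights alpha and beta, and the exact form of each term are hypothetical simplifications for exposition, not the paper's definitions.

```python
import torch
import torch.nn.functional as F

def class_wise_variance_loss(logits, labels):
    # Penalize the variance of softmax outputs within each class so that
    # samples of the same class produce similar final-layer outputs.
    probs = F.softmax(logits, dim=1)
    loss = logits.new_zeros(())
    for c in labels.unique():
        class_probs = probs[labels == c]
        if class_probs.size(0) > 1:
            loss = loss + class_probs.var(dim=0, unbiased=False).sum()
    return loss

def layer_wise_balance_loss(activations):
    # Encourage each hidden layer's per-neuron mean activation to stay close
    # to that layer's overall average, constraining the inner neurons.
    loss = activations[0].new_zeros(())
    for act in activations:
        per_neuron_mean = act.mean(dim=0)  # average over the batch
        loss = loss + (per_neuron_mean - per_neuron_mean.mean()).pow(2).mean()
    return loss

def training_loss(model, x, y, alpha=1.0, beta=1.0):
    # Hypothetical combined objective: task loss plus the two regularizers.
    # `model(x)` is assumed to return (logits, list_of_hidden_activations).
    logits, hidden = model(x)
    return (F.cross_entropy(logits, y)
            + alpha * class_wise_variance_loss(logits, y)
            + beta * layer_wise_balance_loss(hidden))
```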
Spectral-DP: Differentially Private Deep Learning through Spectral Perturbation and Filtering
Differential privacy is a widely accepted measure of privacy in the context
of deep learning algorithms, and achieving it relies on a noisy training
approach known as differentially private stochastic gradient descent (DP-SGD).
Because DP-SGD requires adding noise directly to every gradient in a dense
neural network, its privacy guarantee comes at a significant utility cost. In this work,
we present Spectral-DP, a new differentially private learning approach which
combines gradient perturbation in the spectral domain with spectral filtering
to achieve a desired privacy guarantee with a lower noise scale and thus better
utility. We develop differentially private deep learning methods based on
Spectral-DP for architectures that contain both convolution and fully connected
layers. In particular, for fully connected layers, we combine a block-circulant
based spatial restructuring with Spectral-DP to achieve better utility. Through
comprehensive experiments, we study and provide guidelines to implement
Spectral-DP deep learning on benchmark datasets. In comparison with
state-of-the-art DP-SGD based approaches, Spectral-DP is shown to have
uniformly better utility performance in both training from scratch and transfer
learning settings.
Comment: Accepted at the 2023 IEEE Symposium on Security and Privacy (SP).
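
To make the idea of spectral-domain gradient perturbation with filtering concrete, here is a minimal, illustrative PyTorch sketch. The use of a plain FFT, the placement of the Gaussian noise, and the top-k magnitude filtering rule are assumptions for exposition; they do not reproduce the paper's algorithm, its block-circulant restructuring, or its privacy accounting, and gradient clipping is omitted.

```python
import torch

def spectral_perturb_and_filter(grad, noise_std, keep_ratio=0.5):
    # Illustrative sketch: transform a gradient to the spectral domain,
    # add Gaussian noise to the spectral coefficients, keep only the
    # largest-magnitude fraction of them (filtering), and transform back.
    flat = grad.flatten()
    spec = torch.fft.fft(flat)                       # to the spectral domain
    noise = noise_std * (torch.randn_like(spec.real)
                         + 1j * torch.randn_like(spec.real))
    noisy = spec + noise                             # spectral perturbation

    k = max(1, int(keep_ratio * noisy.numel()))
    top_idx = noisy.abs().topk(k).indices            # coefficients to keep
    mask = torch.zeros_like(noisy, dtype=torch.bool)
    mask[top_idx] = True
    filtered = torch.where(mask, noisy, torch.zeros_like(noisy))

    return torch.fft.ifft(filtered).real.reshape(grad.shape)
```

In a DP-SGD-style training loop, a transform like this would replace the plain per-gradient noise addition step; an actual privacy guarantee would additionally require per-example clipping and a formal accountant, which this sketch leaves out.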